A Multilevel Introspective Dynamic Optimization System For Holistic Power-Aware Computing
Power consumption is rapidly becoming the dominant limiting factor for further improvements in computer design. Curiously, this applies both at the "high end" of workstations and servers and the "low end" of handheld devices and embedded computers. At the high end, the challenge lies in dealing with exponentially growing power densities. At the low end, there is demand to make mobile devices more powerful and longer lasting, but battery technology is not improving at the same rate that power consumption is rising. Traditional power-management research is fragmented: techniques are developed at specific levels without fully exploring their synergy with other levels. Most software techniques target either operating systems or compilers but do not explore the interaction between the two layers, and they have not fully explored the potential of virtual machines for power management. In contrast, we are developing a system that integrates information from multiple levels of software and hardware, connecting these levels through a communication channel. At the heart of this system are a virtual machine that compiles and dynamically profiles code, and an optimizer that reoptimizes all code, including that of applications and the virtual machine itself. We believe this introspective, holistic approach enables more informed power-management decisions.
A new way of estimating compute boundedness and its application to dynamic voltage scaling
Abstract: Many recent dynamic voltage-scaling (DVS) algorithms use hardware events (such as cache misses, memory bus transactions, or instruction execution rates) as the basis for deciding how much a program region can be slowed down with acceptable performance loss. Although these approaches result in power savings, the hardware events measured are at best indirectly related to execution time and clock frequency. We propose a new metric for evaluating the performance loss caused by DVS, a metric that is logically related to clock frequency and execution time, namely the percentage drop in cycles. Further, we show that we can predict with high accuracy the execution time of a code region at any clock frequency after measuring the total number of cycles spent in that region for two clock frequencies: the maximum and the second highest clock frequency. Measurements using several real-world applications show that this "two-point" model predicts execution times with an accuracy that is greater than 95% in many cases. This result can be used to develop low-overhead DVS algorithms that are more system-aware than many of the current algorithms, which rely on measuring indirect effects.
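The two-point idea above can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: it assumes the common linear cycle model C(f) = N_cpu + f * T_mem, where N_cpu is the frequency-dependent on-chip cycle count and T_mem the frequency-independent off-chip (memory) time, and the function names are invented here.

```python
def fit_two_point(f1, c1, f2, c2):
    """Fit C(f) = n_cpu + f * t_mem from two (frequency, total-cycles)
    measurements taken at the two highest clock frequencies.
    Frequencies are in Hz, cycle counts are raw totals for the region."""
    t_mem = (c1 - c2) / (f1 - f2)   # slope: off-chip time in seconds
    n_cpu = c1 - f1 * t_mem         # intercept: on-chip cycles
    return n_cpu, t_mem

def predict_time(f, n_cpu, t_mem):
    """Predicted execution time of the region at any clock frequency f:
    the on-chip portion stretches as 1/f, the memory portion does not."""
    return n_cpu / f + t_mem
```

For example, a region that spends 2e9 cycles at 2 GHz and 1.9e9 cycles at 1.8 GHz fits to 1e9 on-chip cycles plus 0.5 s of memory time, so at 1 GHz it is predicted to take 1.5 s rather than the 2.0 s a naive 1/f scaling would suggest.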
Preliminary phytochemical screening of the marine alga Ulva fasciata and its effect on the growth performance and biochemical composition of Indian major carp Cirrhinus mrigala fingerlings
The preliminary phytochemical compounds of Ulva fasciata, collected from Mandapam on the southeast coast of Tamil Nadu, India, were screened. The alga was subjected to petroleum ether, chloroform, acetone and ethanol extractions. Ten compound classes were detected in the extract of U. fasciata, including alkaloids, terpenoids, flavonoids, phenolics, saponins, cardiac glycosides, sterols, quinones and reducing sugars. Among these, alkaloids, phenolics, quinones, flavonoids and reducing sugars were abundant in the U. fasciata extract. A basal diet was prepared by replacing fish meal with U. fasciata at 1, 3 and 5%, along with groundnut oilcake and soybean meal (as protein sources), wheat bran and sunflower oil (as carbohydrate and lipid sources, respectively), and tapioca flour and egg albumin (as binding agents), and was fed to Cirrhinus mrigala fingerlings for 60 days. An artificial feed formulated without U. fasciata served as the control. Among the three U. fasciata-incorporated diets, fingerlings fed the 5% diet showed the best (P<0.05) survival and growth performance, including weight gain (WG) and muscle biochemical constituents (total protein, carbohydrate, lipid and ash). The 5% U. fasciata diet can therefore also be considered a good nutritional supplement, and the present work recommends incorporating U. fasciata as a feed additive to achieve sustainable production in C. mrigala culture.
Virtual-machine driven dynamic voltage scaling
In current DVS approaches, voltage-scaling decisions are made statically at compile time and/or dynamically at the OS level. While this has yielded excellent results for a wide range of applications, there is an even better solution for platform-independent code (such as Java bytecode) that executes on virtual machines. Such virtual machines have fine-grained execution information about the actual workloads that run on them, as opposed to static compilers, which at best have off-line profiling data from previous workloads. Based on their high-level model of the actual workload, virtual machines can make DVS decisions with high precision.
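As a hedged sketch of how a virtual machine might turn per-region profile data into a DVS decision: the frequency list, the profiled memory-stall fraction, and the performance-loss budget below are illustrative assumptions, not details from the abstract.

```python
def pick_frequency(mem_fraction, freqs, loss_budget=0.05):
    """Pick the lowest available clock frequency whose predicted
    slowdown for this region stays within loss_budget.

    mem_fraction: profiled fraction of the region's time spent stalled
    on memory (frequency-independent); only the remaining compute
    fraction stretches when the clock slows."""
    base = max(freqs)
    compute = 1.0 - mem_fraction
    for f in sorted(freqs):                    # try slowest first
        slowdown = compute * (base / f - 1.0)  # relative time increase
        if slowdown <= loss_budget:
            return f
    return base
```

A memory-bound region (mem_fraction near 1) can thus run at a low frequency with little perceived slowdown, while a compute-bound region stays at the top frequency; the VM's fine-grained profiles are what make the per-region mem_fraction estimate available at run time.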